We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For an SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves upon recent advances of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
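As a minimal illustration of the convolution idea, the standard Gaussian-smoothing gradient estimator queries a stochastic gradient oracle at Gaussian-perturbed points; this is a generic sketch of the smoothing primitive, not the paper's reweighted (ReSQue) estimator, and the oracle `stoch_grad` and radius `rho` are illustrative placeholders:

```python
import numpy as np

def smoothed_grad(stoch_grad, x, rho, rng, batch=1):
    """Unbiased estimate of the gradient of f convolved with N(0, rho^2 I),
    i.e. E_z[grad f(x + rho * z)] for z ~ N(0, I). `stoch_grad` is any
    bounded-variance stochastic gradient oracle for f."""
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(batch):
        z = rng.standard_normal(d)
        g += stoch_grad(x + rho * z)
    return g / batch

# Toy check: f(x) = ||x||^2 / 2 has gradient x; smoothing leaves the
# estimator unbiased at any x.
rng = np.random.default_rng(0)
x = np.ones(3)
est = smoothed_grad(lambda y: y, x, rho=0.1, rng=rng, batch=4000)
```

With a quadratic, the smoothed gradient equals the true gradient in expectation, so `est` concentrates around `x` as `batch` grows.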
Existing analyses of neural network training often operate under the unrealistic assumption of an extremely small learning rate. This lies in stark contrast to practical wisdom and empirical studies, such as the work of J. Cohen et al. (ICLR 2021), which exhibit startling new phenomena (the "edge of stability" or "unstable convergence") and potential benefits for generalization in the large learning rate regime. Despite a flurry of recent works on this topic, however, the latter effect is still poorly understood. In this paper, we take a step towards understanding genuinely non-convex training dynamics with large learning rates by performing a detailed analysis of gradient descent for simplified models of two-layer neural networks. For these models, we provably establish the edge of stability phenomenon and discover a sharp phase transition for the step size below which the neural network fails to learn "threshold-like" neurons (i.e., neurons with a non-zero first-layer bias). This elucidates one possible mechanism by which the edge of stability can in fact lead to better generalization, as threshold neurons are basic building blocks with useful inductive bias for many tasks.
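For intuition on the boundary that "edge of stability" dynamics hover around, the classical stability threshold is easy to see on a quadratic: gradient descent on $f(x) = Lx^2/2$ converges iff the step size satisfies $\eta < 2/L$. The sketch below only illustrates this textbook fact, not the paper's two-layer analysis:

```python
def gd_iterates(L, eta, x0=1.0, steps=50):
    """Gradient descent on f(x) = L*x^2/2 (gradient L*x). The update is
    x <- (1 - eta*L) * x, which is stable iff |1 - eta*L| < 1, i.e. eta < 2/L."""
    x = x0
    for _ in range(steps):
        x -= eta * L * x
    return x

L = 4.0                                   # curvature, so the threshold is 2/L = 0.5
stable = abs(gd_iterates(L, eta=0.4))     # below threshold: |x| shrinks geometrically
unstable = abs(gd_iterates(L, eta=0.6))   # above threshold: |x| oscillates and diverges
```

At the edge of stability, training runs with the sharpness hovering near this $2/\eta$ boundary rather than safely below it.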
Differentially private deep learning has recently witnessed advances in computational efficiency and privacy-utility trade-off. We explore whether further improvements along the two axes are possible and provide affirmative answers leveraging two instantiations of \emph{group-wise clipping}. To reduce the compute time overhead of private learning, we show that \emph{per-layer clipping}, where the gradient of each neural network layer is clipped separately, allows clipping to be performed in conjunction with backpropagation in differentially private optimization. This results in private learning that is as memory-efficient and almost as fast per training update as non-private learning for many workflows of interest. While per-layer clipping with constant thresholds tends to underperform standard flat clipping, per-layer clipping with adaptive thresholds matches or outperforms flat clipping under given training epoch constraints, hence attaining similar or better task performance within less wall time. To explore the limits of scaling (pretrained) models in differentially private deep learning, we privately fine-tune the 175 billion-parameter GPT-3. We bypass scaling challenges associated with clipping gradients that are distributed across multiple devices with \emph{per-device clipping} that clips the gradient of each model piece separately on its host device. Privately fine-tuning GPT-3 with per-device clipping achieves a task performance at $\epsilon=1$ better than what is attainable by non-privately fine-tuning the largest GPT-2 on a summarization task.
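A minimal numpy sketch of the group-wise clipping primitive described above: each layer's gradient is clipped to its own threshold and noised independently, so the operation needs only that layer's gradient and can run in conjunction with backpropagation. The thresholds and noise multiplier are illustrative placeholders; a real DP optimizer calibrates the noise via a privacy accountant:

```python
import numpy as np

def per_layer_clip_and_noise(grads, thresholds, sigma, rng):
    """Clip each layer's gradient to its own L2 threshold, then add Gaussian
    noise scaled to that threshold. Unlike flat clipping, no global gradient
    norm is needed, so each layer can be processed as soon as its gradient
    is available during the backward pass."""
    noisy = []
    for g, c in zip(grads, thresholds):
        norm = np.linalg.norm(g)
        g_clipped = g * min(1.0, c / max(norm, 1e-12))
        noisy.append(g_clipped + sigma * c * rng.standard_normal(g.shape))
    return noisy

rng = np.random.default_rng(0)
grads = [np.ones(4) * 10.0, np.ones(2) * 0.1]   # two layers' gradients
out = per_layer_clip_and_noise(grads, thresholds=[1.0, 1.0], sigma=0.0, rng=rng)
```

With `sigma=0`, the first layer's gradient is rescaled to norm 1 while the second, already below its threshold, passes through unchanged.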
Many fundamental problems in machine learning can be formulated by the convex program \[\min_{\theta \in \mathbb{R}^d} \sum_{i=1}^{n} f_{i}(\theta),\] where each $f_i$ is a convex, Lipschitz function supported on a subset of $d_i$ coordinates of $\theta$. One common approach to solving this problem, exemplified by stochastic gradient descent, involves sampling one $f_i$ term at every iteration to make progress. This approach crucially relies on a notion of uniformity across the $f_i$'s, formally captured by their condition number. In this work, we give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}(\sum_{i=1}^n d_i \log(1/\epsilon))$ gradient computations, with no assumptions on the condition number. The previous best algorithm independent of the condition number is the standard cutting plane method, which requires $O(nd \log(1/\epsilon))$ gradient computations. As a corollary, we improve the evaluation oracle complexity for decomposable submodular minimization of Axiotis et al. (ICML 2021). Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$. Our algorithms are based on a regularized exponential mechanism which samples from the density $\propto \exp(-k(F + \mu r))$, where $F$ is the empirical loss and $r$ is a regularizer which is strongly convex with respect to $\|\cdot\|$, generalizing a recent work of [GLL22] to non-Euclidean settings. We show that this mechanism satisfies Gaussian differential privacy and that it solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization) by using localization tools from convex geometry. Our framework is the first to apply to private convex optimization in general normed spaces, and it directly recovers the non-private SCO rates achieved by mirror descent as the privacy parameter $\epsilon \to \infty$. As applications, for Lipschitz optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first optimal privacy-utility tradeoffs; for $p = 1$, we improve the tradeoffs obtained by the recent works [AFKT21, BGN21] by at least a logarithmic factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are complemented by polynomial-time samplers whose query complexity we explicitly bound.
Large pretrained models can be privately fine-tuned to achieve performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility tradeoffs. This seemingly contradicts the known model-size dependence of differentially private convex learning and raises the following research question: When does the performance of differentially private learning not degrade as model size increases? We identify that the magnitude of gradients projected onto a subspace is a key factor that determines performance. To precisely characterize this for private convex learning, we introduce a condition, restricted Lipschitz continuity, and derive bounds on the excess empirical and population risks that are dimension-independent under additional conditions. We empirically show that in private fine-tuning of large language models, gradients evaluated near a local optimum are mostly controlled by a few principal components. This behavior is similar to the conditions under which we obtain dimension-independent bounds in the convex setting. Our theoretical and empirical results together provide a possible explanation for the recent success of large-scale private fine-tuning.
In this paper, we study the private optimization problem for non-smooth convex functions $F(x) = \mathbb{E}_i f_i(x)$ on $\mathbb{R}^d$. We show that by adding an $\ell_2^2$ regularizer to $F(x)$ and sampling from $\pi(x) \propto \exp(-k(F(x) + \mu\|x\|_2^2/2))$, we recover both the known optimal empirical risk and population loss under $(\epsilon, \delta)$-DP. Furthermore, we show how to implement this mechanism using $\widetilde{O}(n \min(d, n))$ queries to $f_i(x)$ for DP-SCO, where $n$ is the number of samples/users and $d$ is the ambient dimension. We also give a (nearly) matching lower bound $\widetilde{\Omega}(n \min(d, n))$ on the number of evaluation queries. Our results utilize the following tools, which are of independent interest: (1) We prove Gaussian Differential Privacy (GDP) of the exponential mechanism if the loss function is strongly convex and the perturbation is Lipschitz. Our privacy bound is \emph{optimal}, as it includes the privacy of the Gaussian mechanism as a special case, and it is proved using the isoperimetric inequality for strongly log-concave measures. (2) We show how to sample from $\exp(-F(x) - \mu\|x\|_2^2/2)$ for $G$-Lipschitz $F$ with error $\eta$ in total variation (TV) distance using $\widetilde{O}((G^2/\mu)\log^2(d/\eta))$ unbiased queries to $F(x)$. This is the first sampler whose query complexity has a \emph{polylogarithmic} dependence on the dimension $d$ and accuracy $\eta$.
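To make the sampling-based mechanism concrete, here is an unadjusted Langevin sketch targeting $\pi(x) \propto \exp(-k(F(x) + \mu\|x\|_2^2/2))$ for a toy quadratic $F$. This is only an illustrative sampler; it is not the paper's sampler (which achieves polylogarithmic query complexity), and all parameter values below are placeholders:

```python
import numpy as np

def langevin_sample(grad_F, d, k, mu, step, iters, rng):
    """Unadjusted Langevin dynamics targeting pi(x) ∝ exp(-k(F(x) + mu*||x||^2/2)).
    Each step moves along the gradient of log pi plus injected Gaussian noise."""
    x = np.zeros(d)
    samples = []
    for _ in range(iters):
        g = k * (grad_F(x) + mu * x)       # -grad log pi(x)
        x = x - step * g + np.sqrt(2 * step) * rng.standard_normal(d)
        samples.append(x.copy())
    return np.array(samples)

# Toy quadratic F(x) = ||x - b||^2 / 2, for which pi is Gaussian with
# mean b / (1 + mu) and covariance I / (k * (1 + mu)).
b = np.array([1.0, -1.0])
rng = np.random.default_rng(0)
S = langevin_sample(lambda x: x - b, d=2, k=50.0, mu=1.0,
                    step=1e-3, iters=20000, rng=rng)
mean_est = S[5000:].mean(axis=0)  # discard burn-in
```

For this quadratic target the chain's sample mean settles near $b/(1+\mu)$, matching the closed-form Gaussian mean.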
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pretrained language models, which achieve the state-of-the-art privacy versus utility tradeoffs on many standard NLP tasks. We propose a meta-framework for this problem, inspired by the recent success of highly parameter-efficient methods for fine-tuning. Our experiments show that differentially private adaptations of these approaches outperform previous private algorithms in three important dimensions: utility, privacy, and the computational and memory cost of private training. On many commonly studied datasets, the utility of private models approaches that of non-private models. For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$. In comparison, absent privacy constraints, RoBERTa-Large achieves an accuracy of $90.2\%$. Our findings are similar for natural language generation tasks. On the DART dataset, privately fine-tuned GPT-2-Small, GPT-2-Medium, GPT-2-Large, and GPT-2-XL achieve BLEU scores of 38.5, 42.0, 43.1, and 43.8 respectively (privacy budget of $\epsilon = 6.8, \delta =$ 1e-5), whereas the non-private baseline is $48.1$. All our experiments suggest that larger models are better suited for private fine-tuning: while they are well known to achieve superior accuracy non-privately, we find that they also better maintain their accuracy when privacy is introduced.
Facial attribute synthesis by manipulating latent codes in generative adversarial networks (GANs) has mainly focused on continuous attribute synthesis (e.g., age, pose, and emotion), while discrete attribute synthesis (e.g., face masks and eyeglasses) has received less attention. Directly applying existing works to facial discrete attributes may lead to inaccurate results. In this work, we propose an innovative framework to tackle challenging facial discrete attribute synthesis via semantic decomposing, dubbed SD-GAN. Specifically, we explicitly decompose the discrete attribute representation into two components: the semantic prior basis and the offset latent representation. The semantic prior basis provides an initializing direction for manipulating face representations in the latent space. The offset latent representation, obtained by a 3D-aware semantic fusion network, is proposed to adjust the prior basis. In addition, the fusion network integrates 3D embeddings for better identity preservation and discrete attribute synthesis. The combination of the prior basis and the offset latent representation enables our method to synthesize photo-realistic face images with discrete attributes. Notably, we construct a large and valuable dataset MEGN (Face Mask and Eyeglasses images crawled from Google and Naver) to fill the gap of discrete attributes in existing datasets. Extensive qualitative and quantitative experiments demonstrate the state-of-the-art performance of our method. Our code is available at: https://github.com/MontaEllis/SD-GAN.
Vision-Language Pretraining (VLP) models have recently successfully facilitated many cross-modal downstream tasks. Most existing works evaluate their systems by comparing fine-tuned downstream task performance. However, average downstream task accuracy alone provides little information about the pros and cons of each VLP method, let alone insights on how the community can improve the systems. Inspired by the CheckList for testing natural language processing, we introduce VL-CheckList, a novel framework for understanding the capabilities of VLP models. The proposed method divides the image-texting ability of a VLP model into three categories: objects, attributes, and relations, and uses a novel taxonomy to further break down these three aspects. We conduct comprehensive studies analyzing seven recently popular VLP models via the proposed framework. Results confirm the effectiveness of the proposed method by revealing fine-grained differences among the compared models that were invisible under downstream-task-only evaluation. Further results also demonstrate promising research directions for building better VLP models. Data and code: https://github.com/om-ai-lab/VL-CheckList